77 research outputs found

    Labels direct infants’ attention to commonalities during novel category learning

    Recent studies have provided evidence that labeling can influence the outcome of infants’ visual categorization. However, what exactly happens during learning remains unclear. Using eye-tracking, we examined infants’ attention to object parts during learning. Our analysis of looking behaviors during learning provides insights that go beyond merely observing the learning outcome. Both labeling and non-labeling phrases facilitated category formation in 12-month-olds but not 8-month-olds (Experiment 1). Non-linguistic sounds did not produce this effect (Experiment 2). Detailed analyses of infants’ looking patterns during learning revealed that only infants who heard labels exhibited a rapid focus on the object part that successive exemplars had in common. Although other linguistic stimuli may also be beneficial for learning, we therefore conclude that labels have a unique impact on categorization.

    Multi-level evidence of an allelic hierarchy of USH2A variants in hearing, auditory processing and speech/language outcomes.

    Language development builds upon a complex network of interacting subservient systems. It therefore follows that variations in, and subclinical disruptions of, these systems may have secondary effects on emergent language. In this paper, we consider the relationship between genetic variants, hearing, auditory processing and language development. We employ whole genome sequencing in a discovery family to target association and gene × environment interaction analyses in two large population cohorts: the Avon Longitudinal Study of Parents and Children (ALSPAC) and UK10K. These investigations indicate that USH2A variants are associated with altered low-frequency sound perception which, in turn, increases the risk of developmental language disorder. We further show that Ush2a heterozygous mice have low-level hearing impairments, persistent higher-order acoustic processing deficits and altered vocalizations. These findings provide new insights into the complexity of genetic mechanisms serving language development and disorders, and into the relationships between developmental auditory and neural systems.

    Infant Rule Learning: Advantage Language, or Advantage Speech?

    Infants appear to learn abstract rule-like regularities (e.g., la la da follows an AAB pattern) more easily from speech than from a variety of other auditory and visual stimuli (Marcus et al., 2007). We test whether that facilitation reflects a specialization to learn from speech alone, or from modality-independent communicative stimuli more generally, by measuring 7.5-month-old infants’ ability to learn abstract rules from sign language-like gestures. Whereas infants appear to easily learn many different rules from speech, we found that with sign-like stimuli, and under circumstances comparable to those of Marcus et al. (1999), hearing infants were able to learn an ABB rule, but not an AAB rule. This is consistent with results of studies that demonstrate lower levels of infant rule learning from a variety of other non-speech stimuli, and we discuss implications for accounts of speech facilitation.

    Cross-Modal Transfer of Statistical Information Benefits from Sleep

    Extracting regularities from a sequence of events is essential for understanding our environment. However, there is no consensus regarding the extent to which such regularities can be generalised beyond the modality of learning. One reason for this could be the variation in consolidation intervals used in different paradigms, which may or may not include an opportunity to sleep. Using a novel statistical learning paradigm in which structured information is acquired in the auditory domain and tested in the visual domain over either a 30-minute or a 24-hour consolidation interval, we show that cross-modal transfer can occur, but this transfer is only seen in the 24-hour group. Importantly, the extent of cross-modal transfer is predicted by the amount of slow-wave sleep (SWS) obtained. Additionally, cross-modal transfer is associated with the same pattern of decreasing medial temporal lobe (MTL) and increasing striatal involvement that has previously been observed across 24 hours in unimodal statistical learning. We also observed enhanced functional connectivity after 24 hours in a network of areas implicated in cross-modal integration, including the precuneus and the middle occipital gyrus. Finally, functional connectivity between the striatum and the precuneus was also enhanced, and this strengthening was predicted by SWS. These results demonstrate that statistical learning can generalise to some extent beyond the modality of acquisition and, together with our previously published unimodal results, support the notion that statistical learning is both domain-general and domain-specific.

    Cues for Early Social Skills: Direct Gaze Modulates Newborns' Recognition of Talking Faces

    Previous studies showed that, from birth, speech and eye gaze are two important cues guiding early face processing and social cognition. These studies tested the role of each cue independently; however, infants normally perceive speech and eye gaze together. Using a familiarization-test procedure, we first familiarized newborn infants (n = 24) with videos of unfamiliar talking faces with either direct gaze or averted gaze. Newborns were then tested with photographs of the previously seen face and of a new one. The newborns looked longer at the face that had previously talked to them, but only in the direct gaze condition. These results highlight the importance of both speech and eye gaze as socio-communicative cues by which infants identify others. They suggest that gaze and infant-directed speech, experienced together, are powerful cues for the development of early social skills.

    Integration of Consonant and Pitch Processing as Revealed by the Absence of Additivity in Mismatch Negativity

    Consonants, unlike vowels, are thought to be speech-specific, and therefore no interactions would be expected between consonants and pitch, a basic element of musical tones. The present study used an electrophysiological approach to investigate whether, contrary to this view, there is integrative processing of consonants and pitch, by measuring the additivity of changes in the mismatch negativity (MMN) of evoked potentials. The MMN is elicited by discriminable variations occurring in a sequence of repetitive, homogeneous sounds. In the experiment, event-related potentials (ERPs) were recorded while participants heard frequent (standard) sung consonant-vowel syllables and rare stimuli deviating in either consonant identity only, pitch only, or both dimensions. Every type of deviation elicited a reliable MMN. As expected, the two single-deviant MMNs had similar amplitudes, but the amplitude of the double-deviant MMN was also not significantly different from them. This absence of additivity in the double-deviant MMN suggests that consonant and pitch variations are processed, at least at a pre-attentive level, in an integrated rather than independent way. Domain-specificity of consonants may therefore depend on higher-level processes in the hierarchy of speech perception.
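    The logic of the additivity test can be restated schematically (an editorial sketch, not a formula quoted from the paper): if consonant and pitch deviations were detected by independent pre-attentive change-detection mechanisms, the MMN to the double deviant should approximate the sum of the two single-deviant MMNs,
    \[
    \mathrm{MMN}_{\text{consonant+pitch}} \;\approx\; \mathrm{MMN}_{\text{consonant}} + \mathrm{MMN}_{\text{pitch}},
    \]
    whereas the pattern reported here, \(\mathrm{MMN}_{\text{consonant+pitch}} \approx \mathrm{MMN}_{\text{consonant}} \approx \mathrm{MMN}_{\text{pitch}}\), falls well short of that sum, which is what licenses the conclusion of integrated rather than independent processing.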

    Does Sleep Improve Your Grammar? Preferential Consolidation of Arbitrary Components of New Linguistic Knowledge

    We examined the role of sleep-related memory consolidation processes in learning new form-meaning mappings. Specifically, we examined a Complementary Learning Systems account, which implies that sleep-related consolidation should be more beneficial for new, hippocampally dependent arbitrary mappings (e.g. new vocabulary items) relative to new systematic mappings (e.g. grammatical regularities), which can be better encoded neocortically. The hypothesis was tested using a novel language with an artificial grammatical gender system. Stem-referent mappings implemented arbitrary aspects of the new language, and determiner/suffix + natural gender mappings implemented systematic aspects (e.g. tib scoiffesh + ballerina, tib mofeem + bride; ked jorool + cowboy, ked heefaff + priest). Importantly, the determiner-gender and the suffix-gender mappings varied in complexity and salience, thus providing a range of opportunities to detect beneficial effects of sleep for this type of mapping. Participants were trained on the new language using a word-picture matching task and were tested after a 2-hour delay that included either sleep or wakefulness. Participants in the sleep group outperformed participants in the wake group on tests assessing memory for the arbitrary aspects of the new mappings (individual vocabulary items), whereas we saw no evidence of a sleep benefit in any of the tests assessing memory for the systematic aspects of the new mappings: participants in both groups extracted the salient determiner-natural gender mapping, but not the more complex suffix-natural gender mapping. The data support the predictions of the Complementary Learning Systems account and highlight the importance of the arbitrariness/systematicity dimension in the consolidation process for declarative memories.